How to reduce AWS Lambda cold start time
Introduction
AWS Lambda is one of the most popular services provided by AWS and appears in almost every event-driven architecture I have come across so far. Lambda is an event-driven service, which means its execution starts in response to an event, and it integrates with most other AWS services. This gives us a big scope for usage of Lambda.
It is serverless, so we need not pay attention to scaling and resource allocation. Another added benefit is that we pay for what we use: you are billed only for the time the function is actually running, nothing more.
As they say, 'there are two sides to every coin', and the same goes for Lambda: it has downsides as well. The biggest is latency in responding to events due to cold starts. What is a cold start? It is the time Lambda takes to initialise a fresh execution environment before it can run your handler code. This initialisation time depends on various factors and can vary accordingly, so let's look at some ways in which we can minimise it.
Ways to reduce cold start times
Provisioned Concurrency
With Provisioned Concurrency, a configured number of Lambda execution environments are kept initialised and warm at all times, so when the function is called it starts executing faster because a ready environment already exists. Note that this capacity is billed separately: in addition to the normal per-invocation and GB-second duration charges, you pay for the provisioned environments for as long as they are configured, whether or not they are used. If the number of concurrent calls exceeds the number of warm environments, the extra requests are cold-started. There is a way to avoid this as well: we can send the function's events to an SQS queue, from which the warm environments process messages one by one at their own pace. This smooths out traffic spikes so that only warm environments are used instead of new ones being created.
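As a sketch, Provisioned Concurrency can be configured with a single AWS CLI call. The function name and alias below are placeholders for your own; the qualifier must be a published version or alias, not $LATEST.

```shell
# Keep 5 execution environments warm for the "prod" alias of my-function.
# "my-function" and "prod" are example names, not real resources.
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier prod \
  --provisioned-concurrent-executions 5
```

Remember that these 5 environments are billed for as long as the configuration exists, so it is worth sizing this number from real traffic data.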
Warmed up functions
After a cold start, the environment stays warm for an unspecified amount of time once execution finishes. If another invocation arrives during that window, it reuses the same warm environment instead of creating a new one. If your function is called quite frequently in your application, there is likely to be at least one warm environment available most of the time. Some teams also schedule a periodic 'ping' invocation (for example, via an EventBridge rule every few minutes) to keep an environment warm during quiet periods.
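To benefit from warm environments, expensive setup should live outside the handler so it runs once per environment and is reused on every warm invocation. This is a minimal local sketch; `expensive_setup` and the handler are illustrative names, not part of any AWS API.

```python
# Work done at module level runs during the INIT phase, once per
# execution environment. Warm invocations skip it entirely.

INIT_COUNT = 0  # incremented once per cold start (per new environment)

def expensive_setup():
    """Stands in for slow work such as creating SDK clients or DB pools."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"client": "ready"}

# Runs once when the environment is created.
RESOURCES = expensive_setup()

def handler(event, context):
    # Warm invocations reuse RESOURCES instead of re-running setup.
    return {"init_count": INIT_COUNT, "client": RESOURCES["client"]}
```

Calling the handler repeatedly in the same environment shows the setup ran only once, which is exactly the saving a warm environment gives you.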
Memory Allocation
Every Lambda function is allocated a certain amount of memory; by default it is 128 MB, which is usually enough for a network-bound function but not for complex ones. Increasing the memory allocation also increases the compute power, which makes the function execute faster. However, this is only effective for functions performing compute-heavy work; for a function doing re-routing tasks or other network-bound work, it is not an effective method, as such functions don't need much compute power anyway. As the memory increases, so do the costs. To strike a balance between cost and performance, we can run the function at several memory allocations, such as 128, 256 and 512 MB, and see which gives the best trade-off. AWS provides an open-source tool named AWS Lambda Power Tuning, built on AWS Step Functions, which automates this testing and presents the results graphically.
Generally, CPU-bound Lambda functions see the most benefit when memory increases, whereas network-bound functions see the least. This is because more memory provides greater computational capability, but it does not change the response time of the downstream services behind network calls.
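A rough cost model shows why raising memory for a CPU-bound function is often free or even cheaper: the per-second price scales with memory, but duration can drop proportionally. The per-GB-second price below is illustrative (roughly the published on-demand x86 rate at the time of writing) and the durations are hypothetical measurements, not benchmarks.

```python
# Duration charges on Lambda are billed in GB-seconds:
# (memory in GB) x (duration in seconds) x price.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed on-demand rate, check current pricing

def invocation_cost(memory_mb, duration_ms):
    """Duration cost of one invocation, ignoring the per-request fee."""
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * PRICE_PER_GB_SECOND

# Hypothetical CPU-bound function: doubling memory (and thus CPU share)
# halves the duration.
cost_128 = invocation_cost(128, 4000)  # 4.0 s at 128 MB
cost_256 = invocation_cost(256, 2000)  # 2.0 s at 256 MB
```

Both invocations consume 0.5 GB-seconds, so they cost the same, but the 256 MB configuration finishes twice as fast. This is the trade-off Lambda Power Tuning visualises for you across many memory sizes.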
Reduced Dependencies
When a new Lambda environment is created, the INIT phase runs first: the runtime loads your deployment package and executes all initialisation code outside the handler, including imports, so everything is ready for the handler code to use. If the function depends on a very large library, the INIT phase takes longer to execute, so it is recommended to import only the specific modules that the handler code actually uses, keeping the INIT phase as short as possible. If the INIT code of a Lambda function is still very large, we can break the function into smaller Lambda functions with smaller dependency sets, making each one faster to initialise.
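One way to apply this is lazy importing: keep small, always-needed modules at module level and defer heavy, rarely used ones into the code path that needs them. The CSV export branch below is a hypothetical example standing in for a genuinely heavy dependency.

```python
# Module-level imports run during the INIT phase of every cold start,
# so only small, always-needed modules belong here.
import json

def handler(event, context):
    if event.get("export_csv"):
        # Stand-in for a heavy dependency: imported lazily, it only
        # costs time on the invocations that take this branch, never
        # during cold start for the common case.
        import csv
        return {"format": "csv"}
    return {"body": json.dumps({"format": "json"})}
```

The trade-off is that the first invocation hitting the lazy branch pays the import cost at request time, so this suits dependencies used on rare code paths only.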
Conclusion
These are four ways that I hope help you make your Lambda functions' execution smoother and faster, but ultimately it all comes down to your use case. Given the number of AWS services available, you can always find other ways to achieve your end goal faster than these methods.